299 research outputs found

    Multicontrast MRI reconstruction with structure-guided total variation

    Magnetic resonance imaging (MRI) is a versatile imaging technique that allows different contrasts depending on the acquisition parameters. Many clinical imaging studies acquire MRI data for more than one of these contrasts, such as T1- and T2-weighted images, which makes the overall scanning procedure very time-consuming. As all of these images show the same underlying anatomy, one can try to omit unnecessary measurements by taking the similarity into account during reconstruction. We discuss two modifications of total variation, based on (i) location and (ii) direction, that take structural a priori knowledge into account and reduce to total variation in the degenerate case when no structural knowledge is available. We solve the resulting convex minimization problem with the alternating direction method of multipliers, which separates the forward operator from the prior. For both priors the corresponding proximal operator can be implemented as an extension of the fast gradient projection method on the dual problem for total variation. We tested the priors on six data sets based on phantoms and real MRI images. In all test cases, exploiting the structural information from the other contrast yields better results than separate reconstruction with total variation, in terms of standard metrics such as peak signal-to-noise ratio and the structural similarity index. Furthermore, we found that exploiting the two-dimensional directional information results in images with well-defined edges, superior to those reconstructed solely using a priori information about the edge location.
    Funding: Engineering and Physical Sciences Research Council (Grant ID: EP/H046410/1). This is the final version of the article; it first appeared from the Society for Industrial and Applied Mathematics via http://dx.doi.org/10.1137/15M1047325.
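
    As a sketch of the two priors (notation introduced here for illustration, not copied from the paper): given a spatial weight $w(x) \in [0,1]$ marking likely edge locations, or a normalised gradient field $\xi(x)$ with $|\xi(x)| \le 1$ taken from the other contrast, one may set

        \mathrm{wTV}(u) = \int_\Omega w(x)\,|\nabla u(x)|\,\mathrm{d}x,
        \qquad
        \mathrm{dTV}(u) = \int_\Omega |P_{\xi(x)}\nabla u(x)|\,\mathrm{d}x,
        \qquad
        P_\xi = I - \gamma\,\xi\xi^{\mathsf{T}},\ \gamma \in [0,1].

    Both reduce to total variation in the degenerate case ($w \equiv 1$, respectively $\xi \equiv 0$).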

    Joint Reconstruction for Multi-Modality Imaging with Common Structure

    Imaging is a powerful tool used in many disciplines such as engineering, physics, biology and medicine, to name a few. Recent years have seen a trend of combining imaging modalities into multi-modality imaging tools in which the different modalities acquire complementary information. For example, in medical imaging, positron emission tomography (PET) and magnetic resonance imaging (MRI) are combined to image the structure and function of the human body. Another example is spectral imaging, where each channel provides information about a different wavelength, e.g. about red, green and blue (RGB). Most imaging modalities do not acquire images directly but measure a quantity from which we can reconstruct an image. These inverse problems require a priori information in order to give meaningful solutions. These assumptions often concern the smoothness of the solution, but other information is sometimes available too. Many multi-modality images show a strong inter-channel correlation as they are acquired from the same anatomy in medical imaging or the same scenery in spectral imaging. However, images from different modalities are usually reconstructed separately. In this thesis we aim to exploit this correlation by using the data from all modalities present in the acquisition in a joint reconstruction process, with the assumption that similar structures in all channels are more likely. We propose a framework for joint reconstruction where modalities are coupled by additional information about the solution we seek. A family of priors, called parallel level sets, allows us to incorporate structural a priori knowledge into the reconstruction. We analyse the parallel level set priors in several aspects, including their convexity and the diffusive flow generated by their variation. Several numerical examples in RGB colour imaging and in PET-MRI illustrate the gain of joint reconstruction and in particular of the parallel level set priors.
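
    As a sketch, one representative coupling of this kind (written in a generic form, with notation introduced here) penalises non-parallel gradients:

        \mathrm{PLS}(u, v) = \int_\Omega \big(\,|\nabla u(x)|\,|\nabla v(x)| - |\langle \nabla u(x), \nabla v(x)\rangle|\,\big)\,\mathrm{d}x.

    By the Cauchy-Schwarz inequality this is non-negative, and it vanishes exactly where the gradients, and hence the level sets, of $u$ and $v$ are parallel; the thesis analyses a whole family of such priors, including smoothed variants.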

    Faster PET reconstruction with non-smooth priors by randomization and preconditioning

    Uncompressed clinical data from modern positron emission tomography (PET) scanners are very large, exceeding 350 million data points (projection bins). The last decades have seen tremendous advancements in mathematical imaging tools, many of which lead to non-smooth (i.e. non-differentiable) optimization problems that are much harder to solve than smooth ones. Most of these tools have not been translated to clinical PET data, as the state-of-the-art algorithms for non-smooth problems do not scale well to large data. In this work, inspired by big-data machine learning applications, we use advanced randomized optimization algorithms to solve the PET reconstruction problem for a very large class of non-smooth priors, which includes, for example, total variation, total generalized variation, directional total variation and various physical constraints. The proposed algorithm randomly selects subsets of the data and only updates the variables associated with these subsets. While this idea often leads to divergent algorithms, we show that the proposed algorithm does indeed converge for any proper subset selection. Numerically, we show on real PET data (FDG and florbetapir) from a Siemens Biograph mMR that about ten projections and backprojections are sufficient to solve the MAP optimization problem related to many popular non-smooth priors, showing that the proposed algorithm is fast enough to bring these models into routine clinical practice.
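
    A minimal NumPy sketch of the subset-randomized primal-dual idea (all names, operators and step-size choices are invented here; a Gaussian data term and a nonnegativity constraint stand in for the Poisson likelihood and the non-smooth priors of the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, n_sub = 64, 256, 4
        A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in forward operator
        b = A @ np.clip(rng.standard_normal(n), 0, None) + 0.01 * rng.standard_normal(m)

        subsets = np.array_split(np.arange(m), n_sub)
        L = [np.linalg.norm(A[S], 2) for S in subsets]  # per-subset operator norms
        rho = 0.99
        sigma = [rho / Lj for Lj in L]                  # dual step sizes
        tau = rho / (n_sub * max(L))                    # primal step size

        x, y = np.zeros(n), np.zeros(m)
        z = A.T @ y                                     # running A^T y
        zbar = z.copy()

        for k in range(500):
            x = np.clip(x - tau * zbar, 0, None)        # primal: nonnegativity prior
            j = rng.integers(n_sub)                     # pick one random subset
            S = subsets[j]
            # dual prox for a Gaussian data term (stand-in for the Poisson likelihood)
            y_new = (y[S] + sigma[j] * (A[S] @ x - b[S])) / (1 + sigma[j])
            dz = A[S].T @ (y_new - y[S])
            y[S] = y_new
            z += dz
            zbar = z + n_sub * dz                       # extrapolation corrects for sampling

    Each iteration touches only one subset's rows of A, which is what makes such schemes cheap on very large data.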

    Vector-Valued Image Processing by Parallel Level Sets

    Vector-valued images such as RGB color images or multimodal medical images show a strong interchannel correlation, which is not exploited by most image processing tools. We propose a new notion of treating vector-valued images that is based on the angle between the spatial gradients of their channels. Through minimizing a cost functional that penalizes large angles, images with parallel level sets can be obtained. After formally introducing this idea and the corresponding cost functionals, we discuss their Gâteaux derivatives, which lead to a diffusion-like gradient descent scheme. We illustrate the properties of this cost functional by several examples in denoising and demosaicking of RGB color images. They show that parallel level sets are a suitable concept for color image enhancement. Demosaicking with parallel level sets gives visually perfect results for low noise levels. Furthermore, the proposed functional yields sharper images than the other approaches in our comparison.
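
    A small NumPy sketch of the underlying quantity (function names invented here): a penalty that is non-negative by the Cauchy-Schwarz inequality and zero exactly where the spatial gradients of two channels are parallel.

        import numpy as np

        def pls_penalty(u, v, eps=1e-8):
            uy, ux = np.gradient(u)                    # per-channel spatial gradients
            vy, vx = np.gradient(v)
            dot = ux * vx + uy * vy
            nu = np.sqrt(ux ** 2 + uy ** 2 + eps)      # smoothed gradient magnitudes
            nv = np.sqrt(vx ** 2 + vy ** 2 + eps)
            # non-negative by Cauchy-Schwarz; zero where gradients are parallel
            return np.sum(nu * nv - np.abs(dot))

        rng = np.random.default_rng(0)
        img = rng.random((32, 32, 3))                  # toy RGB image
        print(pls_penalty(img[..., 0], img[..., 1]))   # red-green channel coupling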

    Deep learning as optimal control problems: Models and numerical methods

    We consider recent work of Haber and Ruthotto (2017) and Chang et al. (2018), where deep learning neural networks have been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. We review the first-order conditions for optimality and the conditions ensuring optimality after discretisation. This leads to a class of algorithms for solving the discrete optimal control problem which guarantee that the corresponding discrete necessary conditions for optimality are fulfilled. The differential equation setting lends itself to learning additional parameters such as the time discretisation. We explore this extension alongside natural constraints (e.g. time steps lie in a simplex). We compare these deep learning algorithms numerically in terms of induced flow and generalisation ability.
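
    As a sketch of the interpretation (all shapes and names invented here), a residual forward pass can be read as forward Euler applied to dx/dt = f(x, theta(t)), with the step sizes h_k as extra learnable parameters constrained to a simplex:

        import numpy as np

        def f(x, W, b):
            return np.tanh(W @ x + b)          # one candidate layer vector field

        rng = np.random.default_rng(0)
        d, n_layers = 4, 10
        Ws = 0.1 * rng.standard_normal((n_layers, d, d))
        bs = np.zeros((n_layers, d))

        # learnable step sizes constrained to the simplex: h_k >= 0, sum_k h_k = 1
        h = np.full(n_layers, 1.0 / n_layers)

        x = rng.standard_normal(d)
        for k in range(n_layers):
            x = x + h[k] * f(x, Ws[k], bs[k])  # forward Euler: x_{k+1} = x_k + h_k f(x_k, theta_k)
        print(x)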

    Deep learning as optimal control problems

    We briefly review recent work where deep learning neural networks have been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. We report new preliminary experiments with implicit symplectic Runge-Kutta methods and discuss ongoing and future research in this area.
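
    As a sketch of the numerical ingredient (names invented here): the implicit midpoint rule is the simplest implicit symplectic Runge-Kutta method, and a single step solves an implicit equation, here by fixed-point iteration.

        import numpy as np

        def implicit_midpoint_step(f, x, h, n_iter=20):
            # solve x_new = x + h * f((x + x_new) / 2) by fixed-point iteration
            x_new = x + h * f(x)                      # explicit Euler initial guess
            for _ in range(n_iter):
                x_new = x + h * f(0.5 * (x + x_new))
            return x_new

        # harmonic oscillator z = (q, p); the midpoint rule preserves its energy well
        f = lambda z: np.array([z[1], -z[0]])
        z = np.array([1.0, 0.0])
        for _ in range(100):
            z = implicit_midpoint_step(f, z, h=0.1)
        print(z, 0.5 * (z @ z))                       # energy stays close to 0.5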

    Performance evaluation of MAP algorithms with different penalties, object geometries and noise levels

    We propose a new algorithm (LBFGS-B-PC) that combines ideas from two existing convergent reconstruction algorithms: relaxed separable paraboloidal surrogates (SPS) and the limited-memory Broyden-Fletcher-Goldfarb-Shanno method with bound constraints (LBFGS-B). Its performance is evaluated in terms of log-posterior value and regional recovery ratio. The results demonstrate the superior convergence speed of the proposed algorithm over relaxed SPS and LBFGS-B, regardless of the noise level, activity distribution, object geometry and penalty.
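
    A hedged sketch of the LBFGS-B ingredient (a toy penalized Poisson likelihood via SciPy, without the SPS-derived preconditioning that the proposed LBFGS-B-PC adds; the system matrix, data and quadratic penalty are all stand-ins):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_pix, n_bins = 30, 120
        A = rng.random((n_bins, n_pix)) / n_pix        # toy system matrix
        y = rng.poisson(A @ (10 * rng.random(n_pix)) + 0.1)
        beta, bg = 0.1, 0.1                            # penalty weight, background

        def neg_log_posterior(x):
            ybar = A @ x + bg                          # expected counts
            return np.sum(ybar - y * np.log(ybar)) + 0.5 * beta * np.sum(np.diff(x) ** 2)

        def grad(x):
            ybar = A @ x + bg
            g = A.T @ (1.0 - y / ybar)                 # Poisson likelihood gradient
            g += beta * np.concatenate(([x[0] - x[1]],
                                        2 * x[1:-1] - x[:-2] - x[2:],
                                        [x[-1] - x[-2]]))
            return g

        res = minimize(neg_log_posterior, x0=np.ones(n_pix), jac=grad,
                       method='L-BFGS-B', bounds=[(0, None)] * n_pix)
        print(res.fun, res.nit)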

    Uniform acquisition modelling across PET imaging systems: Unified scatter modelling

    PET imaging is an important tool for studying disease, commonly used by research consortia that run multi-centre studies to improve the statistical power of their findings. The UK government launched the Dementias Platform UK to facilitate one of the world's largest dementia population studies, involving national centres equipped with state-of-the-art PET/MR scanners from two major vendors. However, the difference in PET detector technology between the two scanners involved makes the standardisation of data acquisition and image reconstruction necessary. We propose a new approach to PET acquisition system modelling across different PET systems and technologies, focusing in particular on unified scatter estimation across TOF (time-of-flight) and non-TOF PET systems. The proposed scatter model is fully 3D and voxel based, as opposed to the popular line-of-response driven methods. This means that an independent 3D scatter estimate is found for each emitting voxel, inherently preserving the information necessary for TOF calculations as well as accounting for the large axial field of view. With adequate sampling of the input images, the non-TOF scatter estimate is identical to the summed TOF estimates across TOF bins, without additional computational cost for the TOF estimation. The model is implemented on the latest NVIDIA CUDA GPU platform, allowing finer sampling of image space, which is essential for accurate TOF modelling. The high accuracy of the proposed scatter model is validated using Monte Carlo simulations. The model is deployed in our stand-alone image reconstruction pipeline for the Biograph mMR scanner, demonstrating accurate 3D scatter estimates that result in uniform reconstructions for a high-statistics phantom scan.
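
    A toy NumPy check of the stated TOF/non-TOF consistency (all shapes and weights invented here): if each emitting voxel's contribution is spread over TOF bins with weights summing to one, the TOF estimates sum back to the non-TOF estimate.

        import numpy as np

        rng = np.random.default_rng(0)
        n_vox, n_lor, n_tof = 100, 50, 13
        contrib = rng.random((n_vox, n_lor))              # per-voxel scatter into each LOR

        # per-voxel TOF weights, normalised across bins (stand-in for a TOF kernel)
        w = rng.random((n_vox, n_tof))
        w /= w.sum(axis=1, keepdims=True)

        scatter_tof = np.einsum('vl,vt->tl', contrib, w)  # one estimate per TOF bin
        scatter_nontof = contrib.sum(axis=0)              # voxel-driven non-TOF estimate

        assert np.allclose(scatter_tof.sum(axis=0), scatter_nontof)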

    Accelerating variance-reduced stochastic gradient methods

    Funder: Gates Cambridge Trust (GB).
    Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown to directly benefit from Nesterov's acceleration techniques to match the convergence rates of accelerated gradient methods. Such approaches rely on "negative momentum", a technique for further variance reduction that is generally specific to the SVRG gradient estimator. In this work, we show for the first time that negative momentum is unnecessary for acceleration and develop a universal acceleration framework that allows all popular variance-reduced methods to achieve accelerated convergence rates. The constants appearing in these rates, including their dependence on the number of functions n, scale with the mean-squared error and bias of the gradient estimator. In a series of numerical experiments, we demonstrate that versions of SAGA, SVRG, SARAH, and SARGE using our framework significantly outperform non-accelerated versions and compare favourably with algorithms using negative momentum.
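
    A minimal sketch combining the two ingredients, a variance-reduced (SVRG-type) gradient estimator and a Nesterov-style extrapolation, on a toy least-squares problem; this is an illustration only, not the paper's acceleration framework, and the momentum schedule is an ad-hoc choice.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 100, 10
        A = rng.standard_normal((n, d))
        b = rng.standard_normal(n)

        def grad_i(x, i):                              # gradient of one summand
            return A[i] * (A[i] @ x - b[i])

        def full_grad(x):
            return A.T @ (A @ x - b) / n

        L_max = np.max(np.sum(A ** 2, axis=1))         # per-sample Lipschitz bound
        eta = 1.0 / (3 * L_max)
        x = np.zeros(d)
        z = x.copy()

        for epoch in range(30):
            x_ref, g_ref = x.copy(), full_grad(x)      # SVRG snapshot
            for _ in range(n):
                beta = epoch / (epoch + 3.0)           # ad-hoc momentum schedule
                v = x + beta * (x - z)                 # Nesterov-style extrapolation
                i = rng.integers(n)
                g = grad_i(v, i) - grad_i(x_ref, i) + g_ref  # variance-reduced estimator
                z = x
                x = v - eta * g

        print(np.linalg.norm(full_grad(x)))            # should be small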